Results 1 - 2 of 2
1.
14th International Conference on Information Technology and Electrical Engineering (ICITEE 2022): 253-258, 2022.
Article in English | Scopus | ID: covidwho-2191881

ABSTRACT

The Covid-19 virus spreads quickly, including through sneezing and coughing (droplets of saliva). These aerosols are an infectious hazard for dentists carrying out dental care during the pandemic. Extra-oral suction (EOS) is a device for capturing the patient's aerosols during surgery or dental treatment, but the suction nozzle is still positioned manually so that it sits directly above the patient's mouth. As a result, some aerosol particles may escape the device when the patient turns or changes head position. In this paper, Visual Servoing (VS), an approach for guiding a robot using visual information, is applied: image processing (face and mouth detection) and control are combined to move the nozzle automatically according to the position and orientation of the patient's mouth. Face detection and mouth-openness pose detection are performed using the Haar-cascade method and the Adaptive Boosting (AdaBoost) algorithm. This system is expected to optimize the dentist's performance during operations and minimize transmission of the Covid-19 virus. © 2022 IEEE.
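The paper itself does not publish code, but the detection stage it names maps directly onto OpenCV's Haar cascades (which are trained with AdaBoost internally). The following is a minimal sketch under stated assumptions: OpenCV's bundled haarcascade_frontalface_default.xml, haarcascade_smile.xml as a stand-in for a dedicated mouth cascade, and a webcam at index 0 standing in for the dental camera. The detected mouth center is the image-space target a visual-servo loop would drive the EOS nozzle toward.

    # Hedged sketch: Haar-cascade face detection with a nested mouth-region
    # search, in the spirit of the pipeline the abstract describes.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Assumption: the smile cascade as a stand-in mouth detector.
    mouth_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_smile.xml")

    cap = cv2.VideoCapture(0)  # placeholder for the dental camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            # Search for the mouth only in the lower half of the face ROI,
            # which cuts false positives and localizes the nozzle target.
            roi = gray[y + h // 2 : y + h, x : x + w]
            for (mx, my, mw, mh) in mouth_cascade.detectMultiScale(roi, 1.7, 11):
                cx = x + mx + mw // 2
                cy = y + h // 2 + my + mh // 2
                # (cx, cy) is the target point for the visual-servo loop.
                cv2.circle(frame, (cx, cy), 4, (0, 0, 255), -1)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("EOS target", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()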

2.
23rd International Electronics Symposium (IES 2021): 173-178, 2021.
Article in English | Scopus | ID: covidwho-1550749

ABSTRACT

During the COVID-19 pandemic, hospitals have experienced an increase in the number of patients due to the rapid spread of the virus, and demand for hospital services has risen compared to normal days. A healthcare robot is therefore needed to assist patients and medical personnel in the hospital. The robot must be able to detect and recognize objects and place them in the expected location. The sensor used here is a depth (stereo) camera whose output is an RGB-D image, which is converted to a point cloud to obtain 3D information. The 3D information is then segmented and clustered using RANSAC and Euclidean clustering to isolate the objects to be detected. Feature extraction uses the Viewpoint Feature Histogram (VFH) descriptor to characterize each object, which is then matched against the dataset with an artificial neural network, followed by labelling and visualization of the result. With this system, the robot can detect and recognize objects around the hospital and act on them. At the end of this project, nine datasets and three scenes captured by the authors were tested. The results show an average accuracy of 90.77% for the three multi-object scenes and 98.73% for single-object tests. © 2021 IEEE.
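The segmentation stage of this pipeline (RANSAC plane removal, then clustering of what remains) can be sketched concisely. The abstract's pipeline is PCL-style; the sketch below instead uses Open3D, substitutes DBSCAN for PCL's Euclidean cluster extraction, and omits the VFH and neural-network matching stages entirely. "scene.pcd" is a placeholder file name, not from the paper.

    # Hedged sketch: plane removal + clustering of an RGB-D point cloud,
    # approximating the segmentation stage the abstract describes.
    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scene.pcd")  # cloud from the depth camera

    # RANSAC: fit the dominant plane (e.g. a table) and discard its inliers.
    plane_model, inliers = pcd.segment_plane(
        distance_threshold=0.01, ransac_n=3, num_iterations=1000)
    objects = pcd.select_by_index(inliers, invert=True)

    # Cluster the remaining points; each cluster is one object candidate.
    # DBSCAN stands in here for PCL's Euclidean cluster extraction.
    labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=30))
    n_clusters = int(labels.max()) + 1 if labels.size else 0
    for k in range(n_clusters):
        cluster = objects.select_by_index(np.where(labels == k)[0].tolist())
        # Each cluster would next be described with VFH and classified by
        # the neural network, per the pipeline in the abstract.
        print(f"object {k}: {len(cluster.points)} points")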
